perm filename CBCL[S81,JMC]1 blob
sn#577231 filedate 1981-04-09 generic text, type C, neo UTF8
ANSWERS TO STAN ROSENSCHEIN'S QUESTIONS + SUPPLEMENT
1. Predicate calculus (presumably with functions) is ok for
the communication language. The LISPish syntax given in my memo
is still desirable since the purpose is machine-machine communication.
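As an illustration of what such a LISPish predicate-calculus message might look like, here is a minimal sketch. The message syntax, predicate names, and the tiny reader are all hypothetical assumptions for illustration; they are not the notation defined in the memo.

```python
# Minimal sketch of a LISPish CBCL-style message and a reader for it.
# The syntax and predicate names (REQUEST, PRICE, etc.) are
# illustrative assumptions, not the memo's actual notation.

def tokenize(s):
    """Split an S-expression string into parentheses and atoms."""
    return s.replace("(", " ( ").replace(")", " ) ").split()

def read(tokens):
    """Build a nested-list structure from the token stream."""
    tok = tokens.pop(0)
    if tok == "(":
        expr = []
        while tokens[0] != ")":
            expr.append(read(tokens))
        tokens.pop(0)  # discard the closing ")"
        return expr
    return tok

# A hypothetical buyer-to-seller query in predicate-calculus form:
message = "(REQUEST (PRICE (PENCILS (COLOR YELLOW) (QUANTITY 300))))"
parsed = read(tokenize(message))
print(parsed)
```

The point of the LISPish surface form is that a receiving program can recover the nested predicate structure with a reader this small, with no natural-language parsing.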
2. Q: What does it mean to be talking about the same thing? Is this a
semantic (e.g. model-theoretic) notion?
A: Yes, it's semantic, assuming you are referring to
the buyer and seller talking about the same thing. The problem of
qualifying a reference enough so that they are talking about
the same thing is central in human business communication, and much
of the back-and-forth is about that. Notice that negotiators
often have only a tenuous knowledge of the objects they are
dickering about. If SRI buys a D0 from Xerox, this will be done with
an incomplete knowledge of exactly what constitutes a D0, and
the D0 may be somewhat different from the one on display. In general,
however, buying and selling is discussed (usually) as though
the objects were natural kinds.
3. Q: What type of conventions could/must be observed for two P-C
speaking machines to know they are talking about the same thing?
A: This cannot be assured by conventions, if I understand
what is meant. After some dialog about (say) what color the pencils are,
both sides think they are referring to the same thing. Often it
can then be tied down by referring to a model number or stock number.
Perhaps one can say that dialog often involves approximating
some continuously variable aspect of the world by a discrete model.
4. Q: Is there a "universal protocol" for shipping cognitively-oriented
(belief and desire) self-descriptions among communicating machines?
A: I'm not sure I understand the question. It seems at first glance
that shipping such concepts is not much more problematical than
referring to pencils.
5. Q: Is an initial shared "subtheory" required? How much of a
description of the world (as opposed to a theory of cognitive
agents) need be in such a shared subtheory?
A: It seems to me that a theory of the world is needed more
than any theory of cognitive agents. Consider a company
controller dickering with an IBM salesman about the purchase of
a computer. The salesman says that a power-conditioning unit is
required in addition to the CPU, etc. Both can be vague about what
it actually is. The controller's question is "What else will cost
me money?". The ability to introduce new entities is required for
a full ability to negotiate. In order to make a valid contract,
model numbers are often sufficient, provided they refer either implicitly
or explicitly to the common practices of the industry.
6. Q: How much proliferation of speech-act types is really required?
For instance, do we need to distinguish "imperative force" of an
utterance from "causing the hearer to believe the speaker desires
that he take some action" (i.e., a special case of "informing"), which
would be handled by a single, general-purpose "inform" protocol?
A: I think some performatives will be required, and should be set off
syntactically in order to give legal force to the transactions.
Thus a company can commit itself to pay for what is ordered in a
prescribed manner by its purchasing computer program. The
alternative is that it be a tort to lie about one's intentions.
This is too murky to treat generally. Thus if a computer
says, "If 300 gross of pencils are delivered to me by January 1, I
intend to print a check for $1000 made out to your company
and mail it", this seems harder to treat legally in a general
way than "I hereby offer (on authority of xyz) to buy 300 gross
of pencils for $1000 for delivery by January 1".
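The offer above could be set off syntactically as a performative, so that a receiving program (and a court) can distinguish a binding commitment from mere information. The following sketch is a hypothetical encoding; the field names (PERFORM, OFFER, AUTHORITY, and so on) are assumptions for illustration, not CBCL as specified.

```python
# Hypothetical encoding of the offer in item 6 as a syntactically
# marked performative. All field names here are illustrative
# assumptions, not part of any specified CBCL vocabulary.

offer = ["PERFORM", "OFFER",
         ["AUTHORITY", "xyz"],
         ["BUY", ["ITEM", "pencils"], ["QUANTITY", "300-gross"]],
         ["PRICE", "USD-1000"],
         ["DELIVERY-BY", "January-1"]]

def is_performative(msg):
    """Check the syntactic marker: only messages so marked are
    treated as committing the sending company."""
    return isinstance(msg, list) and len(msg) > 0 and msg[0] == "PERFORM"

print(is_performative(offer))
```

The design point is that legal force attaches to the syntactic marker rather than to an inference about the speaker's intentions, avoiding the murky "tort to lie about one's intentions" alternative.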
7. Q: Can we come up with an interesting scenario to actually
implement? Can we do this quickly?
A: In an ideal world, SRI would devise the CBCL and get paid
by its users, perhaps without ever writing a program using the language.
I'm not sure what is meant by a scenario, and can think of
two kinds. First, two programs written here communicate in CBCL,
playing some kind of Monopoly game. Second, SRI writes a program
for getting reports to people, and the program also
keeps track of SRI's own reports,
asks similar programs over the ARPAnet about reports of
other labs, answers inquiries and accepts orders.
Programs written elsewhere communicate with the SRI program in CBCL.
Notes from April 9 meeting of CBCL seminar.
1. Owing is a non-cognitive abstract phenomenon that is central to
CBCL. Obligation is a more general phenomenon that is also central.
2. The most important cognitively oriented phenomena may be facts
about information. For example, a computer may say, "That information
is in our catalog, which will be available tomorrow."
Referring to information in a file or document is a weak form of an
assertion of knowing. Belief may have to be treated but knowledge
is the first approximation.
3. Performatives can be treated as a kind of action just like delivery
of an item. I.e. to utter a string that legally binds a company is
an action.
4. It was agreed that it is worthwhile and feasible to find examples
in which the computers belong to different naval organizations.
The present goal is a toy system with realistic naval exchanges of
information and orders.
John McCarthy